Conservation law for self-paced movements
Authors
Abstract
Optimal control models of biological movements introduce external task factors to specify the pace of movements. Here, we present the dual to the principle of optimality based on a conserved quantity, called "drive," that represents the influence of internal motivation level on movement pace. Optimal control and drive conservation provide equivalent descriptions for the regularities observed wi...
Similar resources
Full text: Self-Paced Curriculum Learning
Curriculum learning (CL) or self-paced learning (SPL) represents a recently proposed learning regime inspired by the learning process of humans and animals that gradually proceeds from easy to more complex samples in training. The two methods share a similar conceptual learning paradigm, but differ in specific learning schemes. In CL, the curriculum is predetermined by prior knowledge, and rema...
Full text: Self-Paced Co-training
Notation and Definition: We assume that examples are drawn from some distribution D over an instance space X = X1 × X2, where X1 and X2 correspond to two different "views" of examples. Let c denote the target function, and let X+ and X− (for simplicity we assume we are doing binary classification) denote the positive and negative regions of X, respectively. For i ∈ {1, 2}, let X i = {xj ∈ Xi : ...
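The notation above sets up the standard two-view co-training setting: two classifiers, each trained on one view, repeatedly hand their most confident predictions on unlabeled examples to a shared labeled pool. A rough sketch of that loop follows; the nearest-centroid learner, the margin-based confidence heuristic, and the parameter `k` are illustrative choices for this sketch, not details from the paper:

```python
import numpy as np

class CentroidClassifier:
    """Toy stand-in learner: predicts the class of the nearest centroid in one view."""
    def fit(self, X, y):
        self.cents = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self

    def predict_conf(self, X):
        classes = sorted(self.cents)
        # distance of each example to each class centroid
        d = np.stack([np.linalg.norm(X - self.cents[c], axis=1) for c in classes], axis=1)
        pred = np.array(classes)[d.argmin(axis=1)]
        # confidence heuristic: margin between the two centroid distances
        conf = np.abs(d[:, 0] - d[:, 1])
        return pred, conf

def co_train(X1, X2, y, labeled, rounds=5, k=2):
    """Co-training loop over two views X1, X2.

    Only entries of `y` whose index is in `labeled` are treated as known;
    each round, each view's classifier labels its k most confident
    unlabeled examples and adds them to the shared labeled pool.
    """
    labeled = set(labeled)
    y = y.copy()
    for _ in range(rounds):
        idx = sorted(labeled)
        c1 = CentroidClassifier().fit(X1[idx], y[idx])
        c2 = CentroidClassifier().fit(X2[idx], y[idx])
        unlabeled = [i for i in range(len(y)) if i not in labeled]
        if not unlabeled:
            break
        for clf, X in ((c1, X1), (c2, X2)):
            pred, conf = clf.predict_conf(X[unlabeled])
            for j in np.argsort(-conf)[:k]:  # most confident first
                i = unlabeled[j]
                y[i] = pred[j]
                labeled.add(i)
    return y, labeled
```

On a toy problem where both views separate the classes, starting from one labeled example per class, the loop propagates labels to the remaining points.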
Full text: Multi-view Self-Paced Learning for Clustering
Exploiting the information from multiple views can improve clustering accuracy. However, most existing multi-view clustering algorithms are nonconvex and are thus prone to getting stuck in bad local minima, especially when there are outliers and missing data. To overcome this problem, we present a new multi-view self-paced learning (MSPL) algorithm for clustering that learns the multi-view ...
Full text: Self-Paced Learning for Latent Variable Models
Latent variable models are a powerful tool for addressing several tasks in machine learning. However, the algorithms for learning the parameters of latent variable models are prone to getting stuck in a bad local optimum. To alleviate this problem, we build on the intuition that, rather than considering all samples simultaneously, the algorithm should be presented with the training data in a me...
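The usual concrete form of this easy-to-hard intuition is an alternating scheme: fit the model on the samples currently deemed "easy" (per-sample loss below an age parameter λ), then grow λ so harder samples are admitted in later rounds. A minimal sketch for a linear least-squares model follows; the squared loss, the hard 0/1 sample weights, and the pacing parameters `lam` and `growth` are illustrative assumptions, not the paper's latent-variable formulation:

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weights: include a sample only if its loss is below lam."""
    return (losses < lam).astype(float)

def spl_linear_regression(X, y, lam=0.5, growth=1.3, iters=10):
    """Alternate between fitting on the currently 'easy' samples and
    re-selecting samples as the age parameter lam grows."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        losses = (X @ w - y) ** 2            # per-sample squared loss
        v = self_paced_weights(losses, lam)  # select easy samples
        if v.sum() == 0:                     # guard: keep at least the easiest sample
            v[np.argmin(losses)] = 1.0
        # weighted least squares restricted to the selected samples
        W = np.diag(v)
        w = np.linalg.pinv(X.T @ W @ X) @ (X.T @ W @ y)
        lam *= growth                        # admit harder samples next round
    return w
```

With an outlier in the data, the outlier's loss stays above λ throughout, so the fit is driven by the easy, clean samples.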
Journal
Journal title: Proceedings of the National Academy of Sciences
Year: 2016
ISSN: 0027-8424, 1091-6490
DOI: 10.1073/pnas.1608724113